
Linux Soft DMA Driver


This page covers the Linux driver for the Xilinx Soft DMA IPs, including AXI DMA, AXI CDMA, AXI MCDMA and AXI VDMA, for Zynq, Zynq UltraScale+ MPSoC, Versal and MicroBlaze.


Introduction

The Soft IP DMA (AXI DMA/CDMA/MCDMA/VDMA) driver is available as part of the Xilinx Linux distribution and in open source Linux as drivers/dma/xilinx/xilinx_dma.c.

HW IP features

AXI DMA

The AXI Direct Memory Access (AXI DMA) IP provides high-bandwidth direct memory access between the AXI4 memory-mapped and AXI4-Stream-type target peripherals. Its optional scatter-gather capabilities also offload data movement tasks from the Central Processing Unit (CPU) in processor-based systems. Initialization, status, and management registers are accessed through an AXI4-Lite slave interface.

- AXI4 and AXI4-Stream compliant
- Optional Scatter/Gather (SG) DMA support. When Scatter/Gather mode is not selected, the IP operates in Simple DMA mode.
- Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
- Optional Data Re-Alignment Engine
- Optional AXI Control and Status Streams
- Optional Keyhole support
- Optional Micro DMA mode support
- Support for up to 64-bit addressing

Features supported in the driver
- Optional Scatter/Gather (SG) DMA support. When Scatter/Gather mode is not selected, the IP operates in Simple DMA mode.
- Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
- Optional Data Re-Alignment Engine
- Optional AXI Control and Status Streams
- 64-bit addressing support

Note: Multi-channel mode is no longer a supported mode of operation for AXI DMA.
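From Linux, these channels are driven through the standard dmaengine slave API. The following is a minimal, illustrative sketch (not the driver's own code) of a client queuing one transmit buffer on the MM2S channel. It assumes the channel was already obtained with dma_request_chan() against a dmas/dma-names binding such as the test-client node shown in the Test Procedure section, and error handling is trimmed to the essentials.

        /*
         * Minimal sketch: send one buffer out of the AXI DMA MM2S
         * (memory-to-stream) channel.  "tx_chan" is assumed to have been
         * obtained with dma_request_chan(); "buf" is a kernel buffer.
         */
        #include <linux/dmaengine.h>
        #include <linux/dma-mapping.h>
        #include <linux/completion.h>

        static void axidma_xfer_done(void *arg)
        {
                complete(arg);
        }

        static int axidma_send_buffer(struct dma_chan *tx_chan, void *buf, size_t len)
        {
                struct device *dev = tx_chan->device->dev;
                struct dma_async_tx_descriptor *desc;
                DECLARE_COMPLETION_ONSTACK(done);
                dma_addr_t dma_addr;
                dma_cookie_t cookie;
                int ret = 0;

                dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
                if (dma_mapping_error(dev, dma_addr))
                        return -ENOMEM;

                /* DMA_MEM_TO_DEV selects the MM2S direction of the channel pair */
                desc = dmaengine_prep_slave_single(tx_chan, dma_addr, len,
                                                   DMA_MEM_TO_DEV,
                                                   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
                if (!desc) {
                        ret = -EIO;
                        goto unmap;
                }

                desc->callback = axidma_xfer_done;
                desc->callback_param = &done;

                cookie = dmaengine_submit(desc);
                if (dma_submit_error(cookie)) {
                        ret = -EIO;
                        goto unmap;
                }

                /* Start the queued descriptor and wait for the completion interrupt */
                dma_async_issue_pending(tx_chan);
                if (!wait_for_completion_timeout(&done, msecs_to_jiffies(3000))) {
                        dmaengine_terminate_sync(tx_chan);
                        ret = -ETIMEDOUT;
                }

        unmap:
                dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
                return ret;
        }

Receiving on the S2MM channel follows the same pattern with DMA_DEV_TO_MEM and DMA_FROM_DEVICE mappings.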

AXI CDMA

The AXI CDMA provides high-bandwidth direct memory access (DMA) between a memory-mapped source address and a memory-mapped destination address using the AXI4 protocol. An optional Scatter Gather (SG) feature can be used to offload control and sequencing tasks from the system CPU. Initialization, status, and control registers are accessed through an AXI4-Lite slave interface.

- AXI4 compliant
- Primary AXI Memory Map data width support of 32, 64, 128, and 256 bits
- Primary AXI Stream data width support of 8, 16, 32, 64, 128, and 256 bits
- Optional Data Re-Alignment Engine
- Optional Gen-Lock Synchronization
- Independent, asynchronous channel operation
- Provides Simple DMA only mode and an optional hybrid mode supporting both Simple DMA and Scatter-Gather automation
- Optional Store and Forward operation mode with internal Data FIFO (First In First Out)

Features supported in the driver
- Optional Scatter/Gather (SG) DMA support. When Scatter/Gather mode is not selected, the IP operates in Simple DMA mode.
- Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
- Optional Data Re-Alignment Engine
- 64-bit addressing support
- Simple DMA mode
- Scatter-Gather DMA mode

AXI VDMA

The AXI Video Direct Memory Access (AXI VDMA) core is a soft Xilinx IP core that provides high-bandwidth direct memory access between memory and AXI4-Stream type video target peripherals. The core provides efficient two-dimensional DMA operations with independent asynchronous read and write channel operation. Initialization, status, interrupt and management registers are accessed through an AXI4-Lite slave interface.

- High-bandwidth direct memory access for video streams
- Efficient two-dimensional DMA operations
- Independent, asynchronous read and write channel operation
- Gen-Lock frame buffer synchronization
- Supports a maximum of 32 frame buffers
- Supports dynamic video format changes
- Configurable Burst Size and Line Buffer depth for efficient video streaming
- Processor-accessible initialization, status, interrupt and management registers
- Primary AXI Stream data width support for multiples of 8 bits: 8, 16, 24, 32, etc., up to 1024 bits
- 64-bit addressing

Features supported in the driver
- Support for a maximum of 32 frame buffers
- 64-bit addressing
- Gen-Lock frame buffer synchronization
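The AXI CDMA channel listed above is exposed to Linux clients as a generic DMA_MEMCPY provider, so memory-to-memory copies go through the dmaengine memcpy interface rather than the slave API. Below is a minimal, illustrative sketch under the assumption that the source and destination are already valid DMA addresses (mapped or coherently allocated by the caller); this is essentially the path exercised by the generic dmatest client described in the Test Procedure section.

        /*
         * Minimal sketch: memory-to-memory copy through a DMA_MEMCPY capable
         * channel such as AXI CDMA.  "src" and "dst" are DMA addresses
         * prepared by the caller.
         */
        #include <linux/dmaengine.h>
        #include <linux/completion.h>

        static void cdma_copy_done(void *arg)
        {
                complete(arg);
        }

        static int cdma_copy(dma_addr_t dst, dma_addr_t src, size_t len)
        {
                struct dma_async_tx_descriptor *desc;
                DECLARE_COMPLETION_ONSTACK(done);
                dma_cap_mask_t mask;
                struct dma_chan *chan;
                dma_cookie_t cookie;
                int ret = 0;

                /* Ask the dmaengine core for any channel advertising memcpy */
                dma_cap_zero(mask);
                dma_cap_set(DMA_MEMCPY, mask);
                chan = dma_request_channel(mask, NULL, NULL);
                if (!chan)
                        return -ENODEV;

                desc = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                                 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
                if (!desc) {
                        ret = -EIO;
                        goto release;
                }

                desc->callback = cdma_copy_done;
                desc->callback_param = &done;

                cookie = dmaengine_submit(desc);
                if (dma_submit_error(cookie)) {
                        ret = -EIO;
                        goto release;
                }

                dma_async_issue_pending(chan);
                if (!wait_for_completion_timeout(&done, msecs_to_jiffies(3000)))
                        ret = -ETIMEDOUT;

        release:
                dma_release_channel(chan);
                return ret;
        }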

AXI MCDMA

The AXI Multichannel Direct Memory Access (AXI MCDMA) core is a soft Xilinx IP core for use with the Xilinx Vivado Design Suite. The AXI MCDMA provides high-bandwidth direct memory access between memory and AXI4-Stream target peripherals. The core provides a Scatter Gather (SG) interface with support for multiple channels, each with independent configuration.

- AXI4 data width support of 32, 64, 128, 256, 512, and 1024 bits
- AXI4-Stream data width support of 8, 16, 32, 64, 128, 256, 512, and 1024 bits
- Supports up to 16 independent channels
- Supports per-channel interrupt output
- Supports data realignment engine (DRE) alignment for streaming data widths of up to 512 bits
- Supports up to 64 MB transfer per Buffer Descriptor (BD)
- Optional AXI4-Stream Control and Status Streams

Features supported in the driver
- Primary AXI4 Memory Map and AXI4-Stream data width support of 32, 64, 128, 256, 512, and 1024 bits
- 64-bit addressing support
- Scatter-Gather DMA mode on all 16 supported S2MM and MM2S channels
- 64 MB transfer per Buffer Descriptor (BD)
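Each of these MM2S/S2MM channels appears to Linux clients as an ordinary dmaengine slave channel. As an illustration (not the driver's own code), queuing a scatter-gather receive on one S2MM channel could look like the sketch below; rx_chan is assumed to have been obtained with dma_request_chan() and the scatterlist already mapped with dma_map_sg().

        /*
         * Minimal sketch: queue a scatter-gather receive on an AXI MCDMA
         * S2MM channel.  "sgl"/"nents" describe a scatterlist that the
         * caller has already mapped; "done_cb" runs on completion.
         */
        #include <linux/dmaengine.h>
        #include <linux/scatterlist.h>

        static int mcdma_queue_rx(struct dma_chan *rx_chan, struct scatterlist *sgl,
                                  unsigned int nents,
                                  dma_async_tx_callback done_cb, void *cb_arg)
        {
                struct dma_async_tx_descriptor *desc;
                dma_cookie_t cookie;

                /* DMA_DEV_TO_MEM selects the S2MM (stream-to-memory) direction */
                desc = dmaengine_prep_slave_sg(rx_chan, sgl, nents, DMA_DEV_TO_MEM,
                                               DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
                if (!desc)
                        return -EIO;

                desc->callback = done_cb;
                desc->callback_param = cb_arg;

                cookie = dmaengine_submit(desc);
                if (dma_submit_error(cookie))
                        return -EIO;

                /* Each MCDMA channel runs independently of the other channels */
                dma_async_issue_pending(rx_chan);
                return 0;
        }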

Missing Features and Known Issues/Limitations in Driver

AXI DMA: No support for the Keyhole feature
AXI CDMA: None
AXI VDMA: Configurable Burst Size and Line Buffer depth for efficient video streaming

Kernel Configuration

The following config options should be enabled in order to build the Soft IP DMA (AXI DMA/CDMA/VDMA/MCDMA) driver:
CONFIG_DMADEVICES
CONFIG_XILINX_DMA

The driver is available at https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/xilinx_dma.c
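For reference, with the driver built into the kernel the resulting .config fragment looks like the following (CONFIG_XILINX_DMA can also be set to "m" to build the driver as a loadable module):

        CONFIG_DMADEVICES=y
        CONFIG_XILINX_DMA=y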

Devicetree

The device tree node for AXI DMA/CDMA/MCDMA/VDMA is automatically generated by the Device Tree BSP if the core is configured in the HW design.

Steps to generate the device tree are documented here: http://www.wiki.xilinx.com/Build+Device+Tree+Blob

Sample bindings are shown below; the individual DT properties are described in the devicetree binding documentation.

AXI DMA

axi_dma_1: dma@40400000 {
    #dma-cells = ;
    clock-names = "s_axi_lite_aclk", "m_axi_sg_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk";
    clocks = , , , ;
    compatible = "xlnx,axi-dma-1.00.a";
    interrupt-parent = ;
    interrupts = ;
    reg = ;
    xlnx,addrwidth = ;
    xlnx,include-sg ;
    dma-channel@40400000 {
        compatible = "xlnx,axi-dma-mm2s-channel";
        dma-channels = ;
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,include-dre ;
    };
    dma-channel@40400030 {
        compatible = "xlnx,axi-dma-s2mm-channel";
        dma-channels = ;
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,include-dre ;
    };
};

AXI CDMA

axi_cdma_0: dma@4e200000 {
    #dma-cells = ;
    clock-names = "s_axi_lite_aclk", "m_axi_aclk";
    clocks = , ;
    compatible = "xlnx,axi-cdma-1.00.a";
    interrupt-parent = ;
    interrupts = ;
    reg = ;
    xlnx,addrwidth = ;
    xlnx,include-sg ;
    dma-channel@4e200000 {
        compatible = "xlnx,axi-cdma-channel";
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,include-dre ;
        xlnx,max-burst-len = ;
    };
};

AXI VDMA

axi_vdma_0: dma@43000000 {
    #dma-cells = ;
    clock-names = "s_axi_lite_aclk", "m_axi_mm2s_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk", "m_axi_s2mm_aclk";
    clocks = , , , , ;
    compatible = "xlnx,axi-vdma-1.00.a";
    interrupt-parent = ;
    interrupts = ;
    reg = ;
    xlnx,addrwidth = ;
    xlnx,flush-fsync = ;
    xlnx,num-fstores = ;
    dma-channel@43000000 {
        compatible = "xlnx,axi-vdma-mm2s-channel";
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,genlock-mode ;
        xlnx,include-dre ;
    };
    dma-channel@43000030 {
        compatible = "xlnx,axi-vdma-s2mm-channel";
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,genlock-mode ;
        xlnx,include-dre ;
    };
};

AXI MCDMA

axi_mcdma_0: axi_mcdma@a4040000 {
    #dma-cells = ;
    clock-names = "s_axi_aclk", "s_axi_lite_aclk";
    clocks = , ;
    compatible = "xlnx,axi-mcdma-1.1", "xlnx,axi-mcdma-1.00.a";
    interrupt-names = "mm2s_ch1_introut", "mm2s_ch2_introut", "mm2s_ch3_introut", "mm2s_ch4_introut", "mm2s_ch5_introut", "mm2s_ch6_introut", "mm2s_ch7_introut", "mm2s_ch8_introut", "s2mm_ch1_introut", "s2mm_ch2_introut", "s2mm_ch3_introut", "s2mm_ch4_introut", "s2mm_ch5_introut", "s2mm_ch6_introut", "s2mm_ch7_introut", "s2mm_ch8_introut";
    interrupt-parent = ;
    interrupts = ;
    reg = ;
    xlnx,addrwidth = ;
    xlnx,dlytmr-resolution = ;
    xlnx,enable-single-intr = ;
    xlnx,group1-mm2s = ;
    xlnx,group1-s2mm = ;
    xlnx,group2-mm2s = ;
    xlnx,group2-s2mm = ;
    xlnx,group3-mm2s = ;
    xlnx,group3-s2mm = ;
    xlnx,group4-mm2s = ;
    xlnx,group4-s2mm = ;
    xlnx,group5-mm2s = ;
    xlnx,group5-s2mm = ;
    xlnx,group6-mm2s = ;
    xlnx,group6-s2mm = ;
    xlnx,include-mm2s = ;
    xlnx,include-mm2s-dre = ;
    xlnx,include-mm2s-sf = ;
    xlnx,include-s2mm = ;
    xlnx,include-s2mm-dre = ;
    xlnx,include-s2mm-sf = ;
    xlnx,include-sg ;
    xlnx,mm2s-burst-size = ;
    xlnx,mm2s-scheduler = ;
    xlnx,num-mm2s-channels = ;
    xlnx,num-s2mm-channels = ;
    xlnx,prmry-is-aclk-async = ;
    xlnx,s2mm-burst-size = ;
    xlnx,sg-include-stscntrl-strm = ;
    xlnx,sg-length-width = ;
    xlnx,sg-use-stsapp-length = ;
    dma-channel@a4040000 {
        compatible = "xlnx,axi-dma-mm2s-channel";
        dma-channels = ;
        interrupt-parent = ;
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,include-dre ;
    };
    dma-channel@a4040030 {
        compatible = "xlnx,axi-dma-s2mm-channel";
        dma-channels = ;
        interrupt-parent = ;
        interrupts = ;
        xlnx,datawidth = ;
        xlnx,device-id = ;
        xlnx,include-dre ;
    };
};
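On the consumer side, a client driver picks up the channels declared in its own node's dmas/dma-names properties with dma_request_chan(). The sketch below is illustrative only; the channel name "axidma0" is taken from the test-client bindings shown in the Test Procedure section, so substitute whatever names your client node uses.

        /*
         * Minimal sketch: request a DMA channel declared in the client's
         * device tree node via dmas/dma-names.
         */
        #include <linux/dmaengine.h>
        #include <linux/platform_device.h>
        #include <linux/device.h>
        #include <linux/err.h>

        static int client_probe(struct platform_device *pdev)
        {
                struct dma_chan *tx_chan;

                tx_chan = dma_request_chan(&pdev->dev, "axidma0");
                if (IS_ERR(tx_chan))
                        /* Returns -EPROBE_DEFER if the DMA controller has not probed yet */
                        return dev_err_probe(&pdev->dev, PTR_ERR(tx_chan),
                                             "failed to get MM2S DMA channel\n");

                platform_set_drvdata(pdev, tx_chan);
                return 0;
        }

The channel should be released again with dma_release_channel() when the client driver is removed.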

Test Procedure

AXI DMA and MCDMA

A separate test client is provided to test the functionality of the IP. It assumes that the IP streaming interfaces are connected back-to-back in the HW design. The test client transfers data on the transmit streaming interface (MM2S) and compares it with the data received on the other interface (S2MM). The test client is available in the Linux source at https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/axidmatest.c

NOTE: In a ZynqMP Vivado design it is mandatory to enable high address = 1 (Zynq UltraScale+ MPSoC customization -> PS-PL Configuration -> Address Fragmentation -> High Address) and to set the AXI DMA/AXI MCDMA address width to 40 bits. For details please refer to http://www.wiki.xilinx.com/PL+Masters

The test client can be built as a loadable module or built into the kernel.
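For example, assuming the test client's Kconfig symbol is CONFIG_XILINX_DMATEST (verify the exact symbol in drivers/dma/xilinx/Kconfig for your kernel version), building it as a loadable module and running it on the target could look like:

        # Assumed Kconfig symbol - check drivers/dma/xilinx/Kconfig in your tree
        CONFIG_XILINX_DMATEST=m

        # On the target: load the client and check the kernel log for the result
        modprobe axidmatest
        dmesg | tail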

Device-tree Node for the axidma test client

axidmatest_1: axidmatest@1 {
    compatible = "xlnx,axi-dma-test-1.00.a";
    dmas = ;
    dma-names = "axidma0", "axidma1";
};

Device-tree Node for the aximcdma test client

axidmatest_1: axidmatest@1 {
    compatible = "xlnx,axi-dma-test-1.00.a";
    dmas = ;
    dma-names = "axidma0", "axidma1";
};

NOTE: For MCDMA, the MM2S channel (write/tx) IDs start from 0 and are in the [0-15] range. The S2MM channel (read/rx) IDs start from 16 and are in the [16-31] range. These channel IDs are fixed irrespective of the IP configuration.

Running the test client will display the following message when the test is successful:

dmatest: Started 1 threads using dma0chan0 dma0chan1
dma0chan0-dma0c: terminating after 5 tests, 0 failures (status 0)

AXI CDMA

The generic kernel dmatest client is used to test the functionality of the IP. It reads data from one location in memory and compares it after copying the data to another location in memory. The test client is available in the Linux source at https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/dmatest.c
For documentation refer to https://www.kernel.org/doc/html/latest/driver-api/dmaengine/dmatest.html
The test client can be built as a loadable module or built into the kernel.
Running the dmatest client will display the following message when the test is successful:

echo 1 > /sys/module/dmatest/parameters/verbose
echo dma1chan0 > /sys/module/dmatest/parameters/channel
echo 2000 > /sys/module/dmatest/parameters/timeout
echo 1 > /sys/module/dmatest/parameters/iterations
echo 1 > /sys/module/dmatest/parameters/run
[  359.611486] dmatest: Started 1 threads using dma1chan0
[  359.617245] dmatest: dma1chan0-copy0: result #1: 'test passed' with src_off=0x830 dst_off=0x368 len=0x3560 (0)
[  359.629924] dmatest: dma1chan0-copy0: summary 1 tests, 0 failures 77 iops 1002 KB/s (0)

AXI VDMA

A separate test client is provided to test the functionality of the IP. It assumes that the IP streaming interfaces are connected back-to-back in the HW design. The test client transfers data on the transmit streaming interface (MM2S) and compares it with the data received on the other interface (S2MM). The test client is available in the Linux source at https://github.com/Xilinx/linux-xlnx/blob/master/drivers/dma/xilinx/vdmatest.c

The test client can be built as a loadable module or built into the kernel.

Device-tree Node for the test client

vdmatest_1: vdmatest@1 {
    compatible = "xlnx,axi-vdma-test-1.00.a";
    xlnx,num-fstores = ;
    dmas = ;
    dma-names = "vdma0", "vdma1";
};

Running the test client will display the following message when the test is successful:

vdmatest: Started 1 threads using dma0chan0 dma0chan1
dma0chan0-dma0c: terminating after 1 tests, 0 failures (status 0)

Mainline Status

The current driver available in the Xilinx Linux git is in sync with the open source kernel driver, except for the following:
- DMA client drivers (axidmatest and vdmatest): these are Xilinx-specific DMA client drivers and are not upstreamable.

Change Log

2023.1

Error path handling and kernel doc fix

https://github.com/Xilinx/linux-xlnx/commits/xilinx-v2023.1/drivers/dma/xilinx/xilinx_dma.c

2022.2

No changes

2022.1

Bugfix for IRQ mapping errors to allow deferred probe.

https://github.com/Xilinx/linux-xlnx/commits/master/drivers/dma/xilinx/xilinx_dma.c

2021.2

Update DMA mask for high memory access.
Documentation and smatch warning fixes.

https://github.com/Xilinx/linux-xlnx/commits/xlnx_rebase_v5.10_2021.2/drivers/dma/xilinx/xilinx_dma.c

2021.1

5.10 kernel rebase.
Trivial coverity fixes.

cf3db08cef1b dmaengine: xilinx_dma: Typecast with enum to fix the coverity warning
e7007bc989ff dmaengine: xilinx_dma: Modify variable type to fix the incompatible warning
c1d6cd7cde88 dmaengine: xilinx_dma: Add condition to check return value

2020.2

MCDMA fixes (SG capability, usage of xilinx_aximcdma_tx_segment).
Add missing check for empty list in xilinx_dma_tx_status.
Use readl_poll_timeout_atomic variant.

7a34a475cf62 dmaengine: xilinx_dma: Fix SG capability check for MCDMA
a44799713f85 dmaengine: xilinx_dma: Fix usage of xilinx_aximcdma_tx_segment
2aefe65f2861 dmaengine: xilinx_dma: Add missing check for empty list
d2798ec3512e dmaengine: xilinx_dma: use readl_poll_timeout_atomic variant

2020.1

Fix dma channel node order dependency.
MCDMA IP support.
5.4 kernel upgrade (Merge tag 'dmaengine-5.3-rc1' of slave tree + xilinx tree rebase patches).

Commits:

b6848e6 dmaengine: xilinx_dma: In dma channel probe fix node order dependency
87e34b2 dmaengine: xilinx_dma: Extend dma_config structure to store max channel count
9fef941 dmaengine: xilinx_dma: Add Xilinx AXI MCDMA Engine driver support
47ebe00 Merge tag 'dmaengine-5.3-rc1' of slave tree

2019.2

Fix SG internal error in cdma prep_dma_sg mode.
Clear desc_pendingcount in xilinx_dma_reset.
Check for both idle and halted state in axidma stop.
Residue calculation and reporting.
Remove cdmatest client; the generic dmatest client is now used for CDMA validation.

Commits:

552d3f1 dmaengine: xilinx_dma: Fix SG internal error in cdma prep_dma_sg mode
e4a9ef8 dmaengine: xilinx: Clear desc_pendingcount in xilinx_dma_reset
136cd70 dmaengine: xilinx: Check for both idle and halted state in axidma stop_transfer
478500b dmaengine: xilinx: Print debug message when no free tx segments
8eab5a1 dmaengine: xilinx: Remove residue from channel data
0f7b82f dmaengine: xilinx: Add callback_result support
bc6a6ab dmaengine: xilinx: Introduce xilinx_dma_get_residue
41176b9 dmaengine: xilinx: Merge get_callback and _invoke
976bab6 dmaengine: xilinx_dma: Remove desc_callback_valid check

2019.1

Remove axidma multi-channel mode support.
Fix 64-bit simple AXIDMA transfer.
Fix control reg update in vdma_channel_set_config.

Commits:

8c8e3b1 dmaengine: xilinx_dma: Remove axidma multi-channel mode support
c3b6c45 dmaengine: xilinx_dma: Fix 64-bit simple AXIDMA transfer
965442b dmaengine: xilinx_dma: Introduce helper macro for preparing dma address
fbde9af dmaengine: xilinx_dma: Fix control reg update in vdma_channel_set_config

2018.3

Reset DMA channel in dma_terminate_all.
Fix 64-bit simple CDMA transfer.
Code refactoring.

Commits:

1c8b3af dmaengine: xilinx_dma: Reset DMA channel in dma_terminate_all
cf9dfe6 dmaengine: xilinx_dma: Minor refactoring
44b796e dmaengine: xilinx_dma: Fix 64-bit simple CDMA transfer
113e03d dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file
55ea663 dmaengine: xilinx_dma: Fix typos

2018.2

Summary:
Add support for 64MB data transfer.

Commits:
f479cb5 dmaengine: xilinx: dma: In axidma add support for 64MB data transfer

2018.1

Summary:
Upgrade to 4.14 kernel.
Trivial code cleanup, i.e. refactor axidma channel allocation.
Free BD consistent memory in channel free_chan_resources.
Fix DMA idle state on terminate_all.
Enable VDMA S2MM vertical flip support.
Add support for memory sg transactions for CDMA.
In AXIDMA program hardware supported buffer length.

Commits:
818f168 Merge tag 'v4.14' into master
62515d5 dma: xilinx: xilinx_dma: Refactor axidma channel allocation
b0d0ec6 dma: xilinx: xilinx_dma: Free BD consistent memory
a9aeecb dma: xilinx: making dma state as idle on terminating all
1eb7c59 dmaengine: xilinx: dma: Enable VDMA S2MM vertical flip support
d5b6e8d dma: xilinx: xilinx_dma: Move open brace '{' to function definition next line
2eee108 dma: xilinx: xilinx_dma: Document functions return value
ff238b0 dma: xilinx: Add support for memory sg transactions for cdma
86b2c03 vdmatest: xilinx: Add hsize and vsize module parameter
b78597b vdmatest: xilinx: Fix VDMA hang reported in certain resolutions
01a61a2 vdmatest: xilinx: Use octal permissions '0444'
dcee02c dmaengine: xilinx: dma: Program hardware supported buffer length

2017.4

Summary:

Added support for memory sg transactions for cdma.
Fixed race conditions in the driver for cdma.
Differentiate probe based on IP type.
Fix compiler warning.

Commits:
9e8f5fc dma: xilinx: Add support for memory sg transactions for cdma
b3fe111 dma: xilinx: Fix race conditions in the driver for cdma.
61a18fd dma: xilinx: Differentiate probe based on the IP type.
322bd63 dma: xilinx: xilinx_dma: Fix compilation warning.

2017.3

Summary:

Fix issues with dma_get_slave_caps API for AXI DMA configuration.
Fix issues with vdma multi fstore configuration.

Commits:
ed2ee32 dma: xilinx: Fix issues with vdma mulit fstore configuration
54c8b75 dma: xilinx: Fix dma_get_slave_caps gaps

2017.2

None

2017.1

Summary:
Add idle checks across the driver for all the DMAs (AXI DMA/CDMA/VDMA) before submitting the descriptor.
Fix bug in multiple frame stores scenario in vdma.
Fix race condition in the driver for multiple descriptor scenario for axidma.

Commits:
d4df1d5 dma: xilinx_dma: check for channel idle state before submitting the dma descriptor.
05ce73d dma: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
3794829 dma: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario for axidma.

2016.4

None

2016.3

Summary:
Mainlined the driver.
Fixed the issues as per the commit IDs.
Deleted the AXI DMA/CDMA driver and merged the AXI DMA/CDMA code with the VDMA driver.
Merged all three DMA drivers into a single driver.

Commits:
f4cd973 dma: xilinx: axidma: Fix race condition in the cyclic dma mode
853502d vdma: sync driver with mainline
52619f dma: xilinx: Delete AXI DMA driver
97833b1 dma: xilinx: Delete AXI CDMA driver
d78c414 dmaengine: vdma: Use dma_pool_zalloc
7531bdc dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
0cc811a dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
90b6146 dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
300f90b dmaengine: vdma: Add config structure to differentiate dmas
cc28fc1 dmaengine: vdma: Add clock support
0717493 dmaengine: vdma: don't crash when bad channel is requested
60c30ad dmaengine: vdma: Add support for cyclic dma mode
d0509b1 dmaengine: vdma: Use dma_pool_zalloc
d7cb73e dmaengine: vdma: Fix compilation warning in cyclic dma mode
18ef650 dmaengine: vdma: Add 64 bit addressing support for the axi dma
ab182b3 dmaengine: vdma: Add 64 bit addressing support for the axi cdma
aa32340 dmaengine: vdma: Add support for mulit-channel dma mode
fb5fb40 dmaengine: xilinx: Rename driver and config
c41a863 dmaengine: xilinx: Use different channel names for each dma
ebccb5e dmaengine: xilinx: Fix race condition in axi dma cyclic dma mode
6300822 dma: xilinx: Update test clients depends config option
8408c14 dma: xilinx: Check for channel idle state before submitting dma descr…

Related Links

AXI DMA Product Guide (PG021)
AXI CDMA Product Guide (PG034)
AXI VDMA Product Guide (PG020)
Linux Drivers

